Maximally informative dimensions
Maximally informative dimensions is a dimensionality reduction technique used in the statistical analysis of neural responses. Specifically, it seeks a low-dimensional subspace of the stimulus space such that the projection of the stimulus onto that subspace preserves as much information as possible about the neural response. It is motivated by the fact that natural stimuli are typically confined by their statistics to a lower-dimensional space than that spanned by white noise.〔D. J. Field. "Relations between the statistics of natural images and the response properties of cortical cells." J. Opt. Soc. Am. A 4:2379–2394, 1987.〕 Within this subspace, however, stimulus-response functions may be either linear or nonlinear. The idea was originally developed by Tatyana Sharpee, Nicole Rust, and William Bialek in 2003.〔Sharpee, Tatyana, Nicole C. Rust, and William Bialek. "Maximally informative dimensions: analyzing neural responses to natural signals." Advances in Neural Information Processing Systems (2003): 277–284.〕
==Mathematical formulation==
Neural stimulus-response functions are typically given as the probability of a neuron generating an action potential, or spike, in response to a stimulus \mathbf{s}. The goal of maximally informative dimensions is to find a small relevant subspace of the much larger stimulus space that accurately captures the salient features of \mathbf{s}. Let D denote the dimensionality of the entire stimulus space and K denote the dimensionality of the relevant subspace, such that K \ll D. We let \{\mathbf{v}^K\} denote the basis of the relevant subspace, and \mathbf{s}^K the projection of \mathbf{s} onto \{\mathbf{v}^K\}. Using Bayes' theorem we can write out the probability of a spike given a stimulus:
: P(spike|\mathbf{s}^K) = P(spike) f(\mathbf{s}^K)
where
: f(\mathbf{s}^K) = \frac{P(\mathbf{s}^K|spike)}{P(\mathbf{s}^K)}
is some nonlinear function of the projected stimulus.
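Spelled out, this factorization is just Bayes' theorem rearranged: conditioning on the projected stimulus gives
: P(spike|\mathbf{s}^K) = \frac{P(\mathbf{s}^K|spike)\, P(spike)}{P(\mathbf{s}^K)} = P(spike)\, \frac{P(\mathbf{s}^K|spike)}{P(\mathbf{s}^K)},
which identifies f(\mathbf{s}^K) as the ratio of the spike-triggered distribution of the projected stimulus to its prior distribution.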
In order to choose the optimal \{\mathbf{v}^K\}, we compare the prior stimulus distribution P(\mathbf{s}) with the spike-triggered stimulus distribution P(\mathbf{s}|spike) using the Shannon information. The average information (averaged across all presented stimuli) per spike is given by
: I_{spike} = \sum_{\mathbf{s}} P(\mathbf{s}|spike) \log_2 \left[ \frac{P(\mathbf{s}|spike)}{P(\mathbf{s})} \right],〔N. Brenner, S. P. Strong, R. Koberle, W. Bialek, and R. R. de Ruyter van Steveninck. "Synergy in a neural code." Neural Comp., 12:1531–1552, 2000.〕
i.e., the Kullback–Leibler divergence between the spike-triggered and prior stimulus distributions.
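If the stimulus set is discrete, so that both distributions can be tabulated, I_{spike} can be computed directly as that divergence. A minimal sketch in Python (the function name and array layout are illustrative assumptions, not from the source):

```python
import numpy as np

def info_per_spike(p_prior, p_spike_triggered):
    """Per-spike information in bits: the KL divergence between the
    spike-triggered stimulus distribution P(s|spike) and the prior P(s).
    Both inputs are 1-D probability arrays over the same stimulus set;
    we assume any stimulus with q > 0 also has p > 0 (it was presented)."""
    p = np.asarray(p_prior, dtype=float)
    q = np.asarray(p_spike_triggered, dtype=float)
    mask = q > 0  # zero-probability terms contribute nothing to the sum
    return float(np.sum(q[mask] * np.log2(q[mask] / p[mask])))
```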
Now consider a K = 1 dimensional subspace defined by a single direction \mathbf{v}. The average information conveyed by a single spike about the projection x = \mathbf{s} \cdot \mathbf{v} is
: I(\mathbf{v}) = \int dx\, P_{\mathbf{v}}(x|spike) \log_2 \left[ \frac{P_{\mathbf{v}}(x|spike)}{P_{\mathbf{v}}(x)} \right],
where
: P_{\mathbf{v}}(x|spike) = \langle \delta(x - \mathbf{s} \cdot \mathbf{v}) |spike \rangle_{\mathbf{s}}
and
: P_{\mathbf{v}}(x) = \langle \delta(x - \mathbf{s} \cdot \mathbf{v}) \rangle_{\mathbf{s}}.
Because projecting onto a single direction can only discard information, I(\mathbf{v}) \le I_{spike} for any \mathbf{v}. Under this formulation, the relevant subspace of dimension K = 1 would be defined by the direction \mathbf{v} that maximizes the average information I(\mathbf{v}).
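In practice, the two distributions are estimated by histogramming the projections of the presented stimuli, weighting each stimulus by its spike count for the conditional. A minimal sketch under those assumptions (stimuli as rows of an array, one spike count per stimulus; all names hypothetical):

```python
import numpy as np

def estimate_info(v, stimuli, spike_counts, n_bins=25):
    """Histogram estimate of I(v), the information (bits per spike) that
    the projection x = s . v carries about spiking.
    stimuli: (N, D) array, one stimulus per row.
    spike_counts: (N,) array of spikes evoked by each stimulus."""
    v = v / np.linalg.norm(v)
    x = stimuli @ v                         # project every stimulus onto v
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    p_all, _ = np.histogram(x, bins=edges)  # ~ P_v(x), up to normalization
    p_spk, _ = np.histogram(x, bins=edges, weights=spike_counts)  # ~ P_v(x|spike)
    p_all = p_all / p_all.sum()
    p_spk = p_spk / p_spk.sum()
    mask = p_spk > 0                        # empty bins carry no weight
    return float(np.sum(p_spk[mask] * np.log2(p_spk[mask] / p_all[mask])))
```

Note that the bin widths cancel in the ratio, so normalized bin masses can stand in for the densities directly.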
This procedure can readily be extended to a relevant subspace of dimension K > 1 by defining
: P_{\mathbf{v}^K}(\mathbf{x}|spike) = \left\langle \prod_{i=1}^{K} \delta(x_i - \mathbf{s} \cdot \mathbf{v}_i) \Big| spike \right\rangle_{\mathbf{s}}
and
: P_{\mathbf{v}^K}(\mathbf{x}) = \left\langle \prod_{i=1}^{K} \delta(x_i - \mathbf{s} \cdot \mathbf{v}_i) \right\rangle_{\mathbf{s}},
and maximizing I(\mathbf{v}^K).
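Because I(\mathbf{v}) is a non-convex function of \mathbf{v} with many local maxima, the maximization must be done numerically; the original work relies on stochastic optimization for this, and the toy hill-climb below (reusing estimate_info from the sketch above) is only an illustrative stand-in, not the authors' method:

```python
import numpy as np

def find_best_direction(stimuli, spike_counts, n_iter=2000, step=0.1, seed=0):
    """Toy hill-climb for the K = 1 case: perturb the current direction,
    keep the perturbation whenever the information estimate improves.
    Requires estimate_info() from the previous sketch."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(stimuli.shape[1])
    v /= np.linalg.norm(v)
    best = estimate_info(v, stimuli, spike_counts)
    for _ in range(n_iter):
        cand = v + step * rng.standard_normal(v.size)
        cand /= np.linalg.norm(cand)
        info = estimate_info(cand, stimuli, spike_counts)
        if info > best:
            v, best = cand, info
    return v, best
```

For K > 1 the same idea applies with multidimensional histograms over (x_1, ..., x_K), optimizing all K directions jointly.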
